言語機能
Language
P2-2-187
ERPによる日本語文での知識と意味のミスマッチ分析
An ERP Analysis of World-Knowledge and Semantics Mismatches in Written and Spoken Japanese

○小田垣佑1, サクリアニサクティ1, 戸田知基1, グラムニュービッグ1, 中村哲1
○Yu Odagaki1, Sakriani Sakti1, Tomoki Toda1, Graham Neubig1, Satoshi Nakamura1
奈良先端科学技術大学院大学 情報科学研究科1
Graduate School of Information Science, NAIST, Nara1

Now that the cognitive processes of language are gradually being revealed by comprehensive studies, we have a chance to detect the perception of mismatches by measuring brain activity. The effect of mismatched words on the sentence comprehension process has been studied for Western languages (Kutas et al., 1980). In a previous study, when a sentence contained a word that violated semantics or world knowledge, an N400 ERP component appeared during reading of the sentence (Hagoort et al., 2004). This paper presents our research using electroencephalography (EEG) to examine the process of detecting incorrect words caused by semantic or world-knowledge violations in written text and spoken utterances. In these experiments, we presented three kinds of sentences as stimuli: the first were completely correct sentences, the second included a word that violated world knowledge, and the third included a word that violated semantics. Each sentence was split into segments. In the written-stimuli experiment, each segment was shown to participants for 0.5 seconds with 0.5-second intervals, while in the spoken-stimuli experiment, each segment was presented auditorily at 0.5-second intervals, with 1-second intervals between sentences. As a result, a negative shift appeared after words that violated world knowledge or semantics. According to the previous study, the peak of this negative shift occurs at about 400 ms, which indicates the N400 component. However, the peak observed for written stimuli in this study occurred at about 350 ms at the midline electrode Cz. In the spoken version of the experiment, an N400 ERP component was observed after semantic violation words; however, the negative shift after world-knowledge violation words occurred much later than that after semantic violation words.
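The abstract does not describe the ERP analysis pipeline itself. The following is a minimal, hypothetical sketch of how such an N400-style comparison could be computed with MNE-Python; the file name, event codes, and the presence of a Cz electrode are assumptions, not the authors' actual setup.

```python
# Hypothetical sketch of an N400-style ERP comparison with MNE-Python.
# File names, event codes, and channel montage are assumptions, not the
# authors' actual pipeline.
import numpy as np
import mne

raw = mne.io.read_raw_fif("sub01_raw.fif", preload=True)  # hypothetical recording
raw.filter(0.1, 30.0)                                      # typical ERP band-pass

events = mne.find_events(raw)                              # assumes a stimulus channel
event_id = {"correct": 1, "world_knowledge": 2, "semantic": 3}  # assumed codes

# Epoch around the critical word: 200 ms baseline, 800 ms post-stimulus.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

evokeds = {cond: epochs[cond].average() for cond in event_id}

# Difference wave (violation minus correct), as is common for N400 effects.
diff = mne.combine_evoked([evokeds["semantic"], evokeds["correct"]],
                          weights=[1, -1])

# Inspect the negative peak at Cz between 300 and 500 ms.
cz = mne.pick_channels(diff.ch_names, include=["Cz"])[0]
mask = (diff.times >= 0.3) & (diff.times <= 0.5)
peak_latency = diff.times[mask][np.argmin(diff.data[cz, mask])]
print(f"Negative peak at Cz: {peak_latency * 1e3:.0f} ms")
```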
P2-2-188
言語野において音読と黙読は同じボクセルパターンを発生させる
Overt and covert speech evoke similar voxel patterns in language areas

○池田純起1, 柴田智広1, 池田和司1
○Shigeyuki Ikeda1, Tomohiro Shibata1, Kazushi Ikeda1
奈良先端科学技術大学院大学 情報科学研究科1
Graduate School of Information Science, Nara Institute of Science and Technology (NAIST)1

The distribution of cortical activity evoked by speech has been intensively investigated with neuroimaging techniques such as fMRI and PET, and it has been found that the distributions for overt and covert speech largely overlap. It is, however, unclear whether their voxel patterns, presumably related to representations or functions for speech, are similar in language areas, namely BA 1/2/3, BA 4, BA 6, BA 22, BA 41/42, and BA 44. Here we show, by decoding syllables with multi-voxel pattern analysis of fMRI data, that they are indeed similar. Nine healthy Japanese subjects (8 males, 1 female, right-handed) were asked to read twenty-five Japanese syllables aloud or silently. In the language areas, the similarities between overt and covert speech were calculated with a syllable discrimination method in which a one-nearest-neighbor classifier was applied to linear correlation coefficients between the voxel patterns of overt and covert speech. The similarities were tested with a binomial test (FDR at q < 0.01, chance = 4%). As a result, all of the language areas showed significant similarities between overt and covert speech. These results suggest that the representations or functions for overt and covert speech are similar. Additionally, in BA 44 and BA 41/42, the similarities in the left hemisphere were lower than those of the same areas in the right hemisphere, which is congruent with language dominance.
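The decoding scheme described above (a one-nearest-neighbor classifier applied to correlations between overt and covert voxel patterns, tested against the 4% chance level with a binomial test and FDR correction) can be sketched as follows on synthetic data. Array sizes, ROI names, and the scipy/statsmodels calls are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of correlation-based 1-nearest-neighbour syllable decoding
# on synthetic data; shapes and ROI names are placeholders.
import numpy as np
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_syllables, n_voxels = 25, 200           # 25 syllables -> 4% chance level

def decode_accuracy(overt, covert):
    """1-NN decoding: each covert pattern is assigned the syllable of the
    overt pattern it correlates with most strongly."""
    # Row-wise Pearson correlations between covert and overt patterns.
    corr = np.corrcoef(covert, overt)[:n_syllables, n_syllables:]
    predicted = np.argmax(corr, axis=1)
    return np.mean(predicted == np.arange(n_syllables))

# Synthetic ROI data: overt/covert patterns share a weak common component.
rois = ["BA44", "BA22", "BA41_42"]
pvals, accs = [], []
for roi in rois:
    base = rng.standard_normal((n_syllables, n_voxels))
    overt = base + 0.5 * rng.standard_normal((n_syllables, n_voxels))
    covert = base + 0.5 * rng.standard_normal((n_syllables, n_voxels))
    acc = decode_accuracy(overt, covert)
    n_correct = int(round(acc * n_syllables))
    # Binomial test against the 4% chance level.
    pvals.append(binomtest(n_correct, n_syllables, p=0.04,
                           alternative="greater").pvalue)
    accs.append(acc)

# FDR correction (Benjamini-Hochberg) at q = 0.01, as in the abstract.
reject, _, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
for roi, acc, sig in zip(rois, accs, reject):
    print(f"{roi}: accuracy = {acc:.2f}, significant = {sig}")
```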
P2-2-189
マカクザルにおける統計的単語分割に関する神経相関の特定
Identifying neural correlates for statistical segmentation of tone sequences in macaque monkeys

○田村潤1, 久保孝富1, 長坂泰勇2, 大杉直也2, 池田和司1, 藤井直敬2
○Jun Tamura1, Takatomi Kubo1, Ivana Wongwajarachot2, Yasuo Nagasaka2, Naoya Osugi2, Kazushi Ikeda1, Naotaka Fujii2
奈良先端大・情報科学研究科1, 理研BSI・適応知性研究チーム2
Graduate School of Information Science, NAIST, Nara1, Lab for Adaptive Intelligence, BSI, RIKEN2

To acquire a language, human beings must learn word segmentation from continuous tone sequences. One source of information about word boundaries is the transitional probability between adjacent syllables. It has been reported that human infants can learn word segmentation based on transitional probability. In addition, in an EEG study with human adults, ERPs showed that word onsets elicited larger amplitudes than non-onsets. On the other hand, it is not yet well understood whether other animals can learn statistical segmentation or possess any neural correlates of it. In this study, we aimed to identify the neural correlates of word segmentation in macaque monkeys by decoding information about segments of an artificial language from multichannel electrocorticogram signals. For this identification, we utilized a machine-learning feature selection method with frequency-domain features. Our results show that the segmentation information can be decoded with statistically significant accuracy. In addition, features tended to be selected relatively frequently from the superior temporal gyri and prefrontal cortices with respect to electrode location, and from the gamma band with respect to frequency band. These results suggest that macaque monkeys possess a mechanism that makes statistical segmentation possible.
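As an illustration of the kind of analysis described (frequency-domain features from multichannel ECoG, feature selection, and decoding of segment labels), the following sketch uses synthetic data. The ANOVA-based selector, linear SVM, band definitions, and channel counts are assumptions, since the abstract does not name the specific algorithms.

```python
# Illustrative sketch of decoding segment labels from multichannel ECoG
# using frequency-band power features and univariate feature selection.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_trials, n_channels, n_samples = 1000, 200, 64, 500
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

# Synthetic ECoG epochs (trials x channels x samples) and binary labels
# (e.g., word-onset vs. non-onset segment).
ecog = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)

def band_power_features(epochs):
    """Average PSD within each band for every channel (trials x features)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=256, axis=-1)
    feats = [psd[:, :, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands.values()]
    return np.concatenate(feats, axis=1)  # order: all channels per band

X = band_power_features(ecog)
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),
                    LinearSVC(C=1.0, dual=False))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Which (band, channel) features were selected on the full data set?
clf.fit(X, labels)
selected = clf.named_steps["selectkbest"].get_support(indices=True)
band_names = list(bands)
for idx in selected[:10]:
    print(f"band={band_names[idx // n_channels]}, channel={idx % n_channels}")
```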
P2-2-190
複文理解における階層構造処理の皮質メカニズム:fMRI研究
The cortical mechanisms for the processing of hierarchical structure in comprehension of complex sentences: An fMRI study

○岩渕俊樹1, 乾敏郎1, 小川健二2
○Toshiki Iwabuchi1, Toshio Inui1, Kenji Ogawa2
京都大院・情報学1, ATR認知機構研究所2
Grad Sch Inform, Kyoto Univ, Kyoto, Japan1, ATR-CMC, Kyoto, Japan2

To investigate the cortical mechanisms for comprehending complex sentences that include relative clause structures, we performed a functional magnetic resonance imaging (fMRI) experiment. The participants (N = 17) read a center-embedded, a left-branching, or a coordinated sentence in each trial. These complex sentences were divided into six segments, and the segments were presented visually one at a time, each displayed for 1.7 s. Participants were then required to read a simple probe sentence and to indicate, by pressing a button, whether the meaning of the probe was consistent with the preceding complex sentence. First, we compared brain activity during the complex-sentence presentation (10.2 s) between conditions. Understanding a center-embedded sentence is considered to impose greater syntactic demands than a left-branching or coordinated sentence. We found that the left inferior frontal sulcus (IFS), left superior precentral sulcus (SPCS), left temporoparietal junction (TPJ), right SPCS, and precuneus were more activated (p < .05, cluster-level corrected) when comparing the center-embedded condition with the left-branching and coordinated conditions. We then performed a dynamic causal modeling (DCM) analysis and found that the left IFS, left SPCS, and left TPJ interacted with each other during the experimental sessions. This indicates that the cortical network consisting of these regions is important for processing hierarchical syntactic structure in language.
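The condition contrast reported above (center-embedded versus the left-branching and coordinated conditions) could in principle be set up as in the following nilearn sketch. The file name, TR, event timings, and thresholding are placeholders, and the cluster-level correction and DCM analysis reported by the authors (typically performed in SPM) are not reproduced here.

```python
# Hypothetical nilearn sketch of the condition contrast; all file names and
# timings are invented and only stand in for the authors' actual pipeline.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel
from nilearn.glm import threshold_stats_img

# Assumed event table: each complex sentence modelled as a 10.2 s epoch.
events = pd.DataFrame({
    "onset":      [0.0, 20.0, 40.0],
    "duration":   [10.2, 10.2, 10.2],
    "trial_type": ["center_embedded", "left_branching", "coordinated"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm", noise_model="ar1",
                        smoothing_fwhm=8.0)
model = model.fit("sub01_bold.nii.gz", events=events)   # hypothetical run

# Contrast: center-embedded minus the average of the two control conditions.
design = model.design_matrices_[0]
contrast = np.zeros(design.shape[1])
contrast[design.columns.get_loc("center_embedded")] = 1.0
contrast[design.columns.get_loc("left_branching")] = -0.5
contrast[design.columns.get_loc("coordinated")] = -0.5

z_map = model.compute_contrast(contrast, output_type="z_score")

# Rough stand-in for the cluster-level correction reported in the abstract.
thresholded_map, threshold = threshold_stats_img(
    z_map, alpha=0.001, height_control="fpr", cluster_threshold=20)
print(f"voxel-level threshold: z > {threshold:.2f}")
```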
P2-2-191
音声の認識における自己と他者
Voice recognition of self versus others

○徐鳴鏑1, 保前文高1, 橋本龍一郎1, 萩原裕子1
○Mingdi Xu1, Fumitaka Homae1, Ryuichiro Hashimoto1, Hiroko Hagiwara1
首都大学東京人文科学研究科言語科学1
Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, Tokyo, Japan1

Human voices convey information about both the speaker and the content. Various acoustic characteristics, such as the fundamental frequency (F0) and formant structures, have been considered to serve as cues for voice recognition. Here, we aim to determine the contributions of F0 and frequency bands as cues distinguishing self-voice recognition (SVR) from others-voice recognition (OVR), and we hypothesize that voice recognition may involve the sensorimotor system, i.e., listeners may recognize self- and others' voices by matching the auditory inputs with the intended articulatory gestures of the speaker. In the experiment, 30 Japanese subjects, divided into six groups of five, were asked to listen to recorded 3-mora Japanese words and identify the speaker, who was either themselves or one of their four group peers. We created 15 variations of each word read by each person by manipulating two factors: F0 (5 levels: -4, -2, 0, +2, and +4 semitones) and frequency band (3 levels: NORMAL, LOW, and HIGH conditions). For NORMAL, no manipulation was applied; for LOW and HIGH, either the lower or the higher frequencies were retained, using a cutoff frequency at the mean of the 2nd and 3rd formants (F2 and F3). The results showed that under NORMAL and LOW, the more severe the distortion in F0, the lower the accuracy, but there was almost no difference between the accuracies of SVR and OVR. By contrast, under HIGH, F0 scarcely affected the accuracy, and importantly, the accuracy of SVR was significantly higher than that of OVR (p < 0.05). Therefore, we presume that 1) both F0 and the frequency information below F3, which outlines the characteristics of vowels, are essential not only for SVR but also for OVR; and 2) the motor representation of speech may be richer for self-voice than for others' voices, and this plays a critical role in voice recognition, especially in challenging situations such as HIGH, where one has to identify the speaker mainly from consonantal features.
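The stimulus manipulation (F0 shifts of up to ±4 semitones combined with low- or high-pass filtering at the mean of F2 and F3) might be approximated as in the sketch below. The cutoff value, file names, and the use of librosa's phase-vocoder pitch shifter in place of the authors' (unspecified) method are all assumptions.

```python
# Hypothetical reconstruction of the stimulus manipulation: F0 shifts in
# semitones plus low-/high-pass filtering at a cutoff taken as the mean of
# F2 and F3 (the formant estimation itself is omitted here).
import librosa
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

y, sr = librosa.load("word_speaker1.wav", sr=None)   # hypothetical recording
cutoff_hz = 2200.0   # placeholder for the measured mean of F2 and F3

def band_condition(signal, condition):
    """Return the NORMAL, LOW (below cutoff) or HIGH (above cutoff) version."""
    if condition == "NORMAL":
        return signal
    btype = "lowpass" if condition == "LOW" else "highpass"
    sos = butter(4, cutoff_hz, btype=btype, fs=sr, output="sos")
    return sosfiltfilt(sos, signal)

# 5 F0 levels x 3 band conditions = 15 variants per word and speaker.
for semitones in (-4, -2, 0, 2, 4):
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    for condition in ("NORMAL", "LOW", "HIGH"):
        out = band_condition(shifted, condition)
        sf.write(f"word_f0{semitones:+d}_{condition.lower()}.wav", out, sr)
```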
